- Gaussian Mixture Models (GMMs) are an effective representation of resource uncertainty in power systems planning, as they can be tractably incorporated within stochastic optimization models. However, the skewness, multimodality, and bounded physical support of long-term wind power forecasts can require a large number of mixture components to achieve a good fit, leading to complex optimization problems. To address this challenge, we propose a probabilistic model for wind generation uncertainty, termed the Discrete-Gaussian Mixture Model (DGMM), that combines continuous Gaussian components with discrete masses. The model generalizes the classical GMMs that have been widely used to estimate wind power outputs. We employ a modified Expectation-Maximization algorithm (called FixedEM) to estimate the parameters of the DGMM. We provide empirical results on the ACTIVSg2000 synthetic wind generation dataset, where we demonstrate that the fitted DGMM captures the high frequency of time windows in which wind generating units are either producing at maximum capacity or not producing any power at all. Furthermore, we find that the Bayesian Information Criterion of the DGMM is significantly lower than that of existing GMMs using the same number of Gaussian components. This improvement is particularly advantageous when the allowed number of Gaussian components is limited, facilitating the efficient solution of optimization problems for long-term planning.
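The mixture structure described in this abstract, point masses at zero and at maximum capacity combined with continuous Gaussian components, can be sketched as a sampler. The zero/full-capacity probabilities, component weights, means, and standard deviations below are illustrative assumptions, not values from the paper, and the clipping to [0, 1] is a simplification for the sketch.

```python
import numpy as np

def sample_dgmm(n, p0, p1, weights, means, stds, rng):
    """Draw n samples from a hypothetical Discrete-Gaussian Mixture Model:
    a point mass at 0 (prob p0, no production), a point mass at 1
    (prob p1, full capacity), and a GMM for the remaining probability.
    Gaussian draws are clipped to [0, 1] for illustration."""
    u = rng.random(n)
    out = np.empty(n)
    out[u < p0] = 0.0                          # discrete mass: zero output
    out[(u >= p0) & (u < p0 + p1)] = 1.0       # discrete mass: full capacity
    cont = u >= p0 + p1                        # continuous part of the mixture
    k = rng.choice(len(weights), size=cont.sum(), p=weights)
    out[cont] = np.clip(rng.normal(means[k], stds[k]), 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
x = sample_dgmm(10_000, 0.25, 0.15,
                np.array([0.6, 0.4]),          # hypothetical mixture weights
                np.array([0.3, 0.7]),          # hypothetical component means
                np.array([0.1, 0.1]), rng)     # hypothetical component stds
```

A plain GMM would have to spend several narrow components to imitate the spikes at 0 and 1 that the two discrete masses capture directly, which is the modeling advantage the abstract describes.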
- We propose an algorithm to solve convex and concave fractional programs and their stochastic counterparts in a common framework. Our approach is based on a novel reformulation that involves differences of square terms in the constraints and the subsequent use of piecewise-linear approximations of the concave terms. Within the branch-and-bound (B&B) framework, our algorithm adaptively refines the piecewise-linear approximations and iteratively solves convex approximation problems. The convergence analysis provides a bound on the optimality gap as a function of the approximation errors. Based on this bound, we prove that the proposed B&B algorithm terminates in a finite number of iterations, and that the worst-case bound to obtain an ε-optimal solution depends reciprocally on the square root of ε. Numerical experiments on Cobb-Douglas production efficiency and equitable resource allocation problems show that the algorithm efficiently finds highly accurate solutions while significantly outperforming the benchmark algorithms on all the small problem instances solved. A modified branching strategy that takes advantage of the non-linearity of the convex functions further improves performance. Results are also discussed for solving a dual reformulation and for using a cutting surface algorithm to solve the distributionally robust counterpart of the Cobb-Douglas example models.
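The core refinement idea, a piecewise-linear overestimator of a concave term that tightens as breakpoints are added, can be illustrated on a toy concave function. The function √x, the interval, and the breakpoint counts below are illustrative choices, not the paper's reformulation; the sketch only shows the overestimation and refinement properties that drive the bound on the optimality gap.

```python
import numpy as np

def pwl_overestimator(f, fprime, pts):
    """For a concave f, each tangent f(p) + f'(p)(x - p) lies above f,
    so the pointwise minimum of tangents at the breakpoints `pts` is a
    piecewise-linear upper bound on f that tightens as pts is refined."""
    def g(x):
        return np.min([f(p) + fprime(p) * (x - p) for p in pts], axis=0)
    return g

f = np.sqrt                               # toy concave term
fp = lambda x: 0.5 / np.sqrt(x)           # its derivative
grid = np.linspace(0.1, 4.0, 400)

coarse = pwl_overestimator(f, fp, [0.1, 4.0])              # 2 breakpoints
fine = pwl_overestimator(f, fp, np.linspace(0.1, 4.0, 9))  # 9 breakpoints

err_coarse = np.max(coarse(grid) - f(grid))  # worst-case approximation error
err_fine = np.max(fine(grid) - f(grid))      # shrinks under refinement
```

Replacing the concave term by such an upper bound yields a convex relaxation whose gap is controlled by the approximation error, which is the quantity the adaptive B&B refinement drives toward ε.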
- This paper develops a finite approximation approach to find a non-smooth solution of an integral equation of the second kind. Equations with a non-smooth kernel and a non-smooth solution have not been studied before. Such equations arise frequently when modeling stochastic systems. We construct a Banach space of (right-continuous) distribution functions and reformulate the problem as an operator equation. We provide general necessary and sufficient conditions under which the approximation approach developed in this paper converges. We then provide two specific choices of approximation sequences and show that their properties are sufficient to generate approximate solutions that converge to the true solution, assuming solution uniqueness and some additional mild regularity conditions. Our analysis is performed under the supremum norm, allowing wider applicability of our results. Worst-case error bounds are also available by solving a linear program. We demonstrate the viability and computational performance of our approach on three examples. The solution of the first example can be constructed manually and demonstrates the correctness and convergence of our approach. The second example involves the stationary distribution equations of a stochastic model and demonstrates the dramatic improvement our approach provides over computer simulation. The third example solves a problem involving an everywhere non-differentiable function for which no closed-form solution is available.
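The basic discretize-and-solve idea behind finite approximation of a second-kind equation x(t) = g(t) + ∫ K(t,s) x(s) ds can be sketched with a Nystrom-style scheme. The kernel K(t,s) = 0.5·t·s and forcing g(t) = t below are illustrative choices with the closed-form solution x(t) = 1.2·t; this smooth setting is only a sketch of the discretization principle, not of the paper's non-smooth construction on distribution functions.

```python
import numpy as np

# Nystrom-style discretization of a second-kind integral equation
#     x(t) = g(t) + \int_0^1 K(t, s) x(s) ds,   t in [0, 1],
# on a uniform grid with trapezoidal quadrature weights.
# Illustrative data (not from the paper): K(t, s) = 0.5*t*s, g(t) = t,
# for which the exact solution works out to x(t) = 1.2*t.
n = 201
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))        # trapezoidal-rule weights
w[0] = w[-1] = 0.5 / (n - 1)
K = 0.5 * np.outer(t, t)             # kernel evaluated on the grid
g = t

# Replacing the integral by the quadrature sum gives the linear system
# (I - K W) x = g, where W holds the weights on the diagonal.
x = np.linalg.solve(np.eye(n) - K * w, g)
```

Refining the grid drives the approximate solution toward the true one, which is the convergence behavior the paper establishes (under the supremum norm, and for far less regular kernels and solutions than this sketch uses).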
- We study the assignment problem with chance constraints (CAP) and its distributionally robust counterpart, DR-CAP. We present a technique for estimating the big-M in such a formulation that takes advantage of the ambiguity set. We consider a 0-1 bilinear knapsack set to develop valid inequalities for CAP and DR-CAP, and generalize these to the joint chance constraint problem. A probability cut framework is also developed to solve DR-CAP. A computational study on problem instances obtained from real hospital surgery data shows that the developed techniques allow us to solve certain model instances and reduce the computational time for others. Using a Wasserstein ambiguity set in the DR-CAP model improves the out-of-sample performance of satisfying the chance constraints more significantly than is possible by increasing the sample size in the sample average approximation technique. The solution time for DR-CAP model instances is of the same order as that for solving the CAP instances. This finding is important because chance-constrained optimization models are very difficult to solve when the coefficients in the constraints are random.
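The sample average approximation that the abstract benchmarks against can be sketched for a single chance constraint: estimate the violation probability of a candidate assignment from sampled scenarios. The scenario distribution, candidate assignment, capacity, and risk level below are illustrative assumptions, not the paper's data or formulation.

```python
import numpy as np

# Sample-average check of one chance constraint
#     P( a(xi)^T x <= b ) >= 1 - eps
# for a fixed candidate assignment x, using N sampled scenarios.
rng = np.random.default_rng(1)
N, n = 1000, 5
A = rng.normal(10.0, 2.0, size=(N, n))  # hypothetical random durations per scenario
x = np.array([1, 1, 0, 1, 0])           # candidate: items 0, 1, 3 assigned together
b, eps = 40.0, 0.1                      # hypothetical capacity and risk level

viol = (A @ x > b).mean()               # empirical violation probability
feasible = viol <= eps                  # SAA verdict on the chance constraint
```

In a big-M mixed-integer formulation of this constraint, each scenario gets a binary indicator that relaxes the capacity by M when the scenario is allowed to violate; a tighter estimate of M (which is what the abstract's ambiguity-set technique targets) directly strengthens the relaxation.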
- We study the chance-constrained bin packing problem, with an application to hospital operating room planning. The bin packing problem allocates items of random sizes that follow a discrete distribution to a set of bins with limited capacity, while minimizing the total cost; the bin capacity constraints must be satisfied with a given probability. We investigate a big-M and a 0-1 bilinear formulation of this problem. We analyze the bilinear structure of the formulation and use lifting techniques to identify cover, clique, and projection inequalities that strengthen it. We show that in certain cases these inequalities are facet-defining for a bilinear knapsack constraint that arises in the reformulation. An extensive computational study is conducted for the operating room planning problem of minimizing the number of open operating rooms, using test problems generated from real hospital data. A lower-bound improvement heuristic is combined with the cuts proposed in this paper in a branch-and-cut framework. The computations illustrate that the techniques developed in this paper can significantly improve the performance of the branch-and-cut method; problems with up to 1,000 scenarios are solved to optimality in less than an hour. A safe approximation based on conditional value at risk (CVaR) is also solved. The computations show that the CVaR approximation typically leaves a gap of one operating room (e.g., six instead of five) to satisfy the chance constraint. Summary of Contribution: This paper investigates a branch-and-cut algorithm for a chance-constrained bin packing problem with multiple bins. Chance-constrained bin packing provides a modeling framework for applied operations research problems in health care, scheduling, and other areas. This paper studies alternative computational approaches to solving this problem, and it uses real data from a hospital operating room planning setting to test the algorithmic ideas; the work therefore lies at the intersection of computing and operations research. Several ideas are developed and studied, including a strengthened big-M reformulation, analysis of a bilinear reformulation, identification of certain facet-defining inequalities for this formulation, and a lower-bound generation heuristic for a model that minimizes the number of bins. Computational experiments for an operating room planning model that uses hospital data demonstrate the computational improvement and importance of the proposed approaches.
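The cover inequalities mentioned in this abstract can be illustrated on a small deterministic 0-1 knapsack constraint; the weights, capacity, and cover below are hypothetical, and the paper's setting (lifted covers for a bilinear knapsack under chance constraints) is considerably more general than this sketch.

```python
def is_minimal_cover(w, c, C):
    """For a 0-1 knapsack constraint sum(w[i] * x[i]) <= c, a cover is an
    index set C whose total weight exceeds c; it is minimal if removing
    any single item makes the rest fit.  A minimal cover yields the valid
    inequality sum(x[i] for i in C) <= len(C) - 1."""
    total = sum(w[i] for i in C)
    if total <= c:
        return False                       # not a cover at all
    return all(total - w[i] <= c for i in C)

w = [6, 5, 5, 4]   # hypothetical item weights
c = 10             # hypothetical bin capacity
# C = {0, 1}: weight 6 + 5 = 11 > 10, and dropping either item fits,
# so it is a minimal cover, implying the cut x_0 + x_1 <= 1.
```

Such cuts rule out fractional solutions that pack conflicting items into one bin, which is how they strengthen the branch-and-cut relaxations described above.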